1.
Nat Methods; 21(2): 213-216, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37500758

ABSTRACT

Quantitative evaluation of image segmentation algorithms is crucial in the field of bioimage analysis. The most common assessment scores, however, are often misinterpreted, and multiple definitions coexist under the same name. Here we present the ambiguities of evaluation metrics for segmentation algorithms and show how these misinterpretations can alter the leaderboards of influential competitions. We also propose guidelines for how these problems could be tackled.
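To make the ambiguity concrete, here is a minimal sketch (our illustration, not the paper's code) of how two coexisting conventions for the Dice score, differing only in how an empty prediction of an empty ground truth is scored, can flip the ranking of two hypothetical methods:

```python
import numpy as np

def dice_empty_is_one(pred, gt):
    """Dice coefficient; an empty-vs-empty pair scores 1.0 (perfect)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 1.0 if total == 0 else 2.0 * inter / total

def dice_empty_is_zero(pred, gt):
    """Dice coefficient; an empty-vs-empty pair scores 0.0 (failure)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 0.0 if total == 0 else 2.0 * inter / total

shape = (4, 4)
gt_empty = np.zeros(shape, bool)               # image 1: no objects
gt_full = np.ones(shape, bool)                 # image 2: all foreground

a1 = np.zeros(shape, bool)                     # A: correct on the empty image
a2 = np.zeros(shape, bool); a2[:2] = True      # A: covers half of image 2
b1 = np.zeros(shape, bool); b1[0, 0] = True    # B: one false-positive pixel
b2 = np.ones(shape, bool)                      # B: perfect on image 2

for name, dice in [("empty scores 1", dice_empty_is_one),
                   ("empty scores 0", dice_empty_is_zero)]:
    mean_a = np.mean([dice(a1, gt_empty), dice(a2, gt_full)])
    mean_b = np.mean([dice(b1, gt_empty), dice(b2, gt_full)])
    print(f"{name}: A={mean_a:.3f}  B={mean_b:.3f}  ->",
          "A leads" if mean_a > mean_b else "B leads")
```

Averaged over the two images, method A wins under the first convention and method B under the second; a leaderboard built on either convention alone silently favors one style of error.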


Subjects
Algorithms; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods
2.
Sci Rep; 13(1): 11270, 2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37438376

ABSTRACT

Controlling chromatography systems for downstream processing of biotherapeutics is challenging because of the highly nonlinear behavior of feed components and their complex interactions with binding phases. This challenge is exacerbated by the highly variable binding properties of the chromatography columns. Furthermore, the inability to collect information from inside chromatography columns makes real-time control even more problematic. Typical static control policies either perform suboptimally on average, owing to column variability, or must be adapted to each column, which requires expensive experimentation. Exploiting recent advances in simulation-based data generation and deep reinforcement learning, we present an adaptable control policy that is learned in a data-driven manner. Our controller learns a control policy by directly manipulating the inlet and outlet flow rates to optimize a reward function that specifies the desired outcome. Training our controller on columns with high variability enables us to create a single policy that adapts to multiple variable columns. Moreover, we show that our learned policy achieves higher productivity, albeit with somewhat lower purity, than a human-designed benchmark policy. Our study shows that deep reinforcement learning offers a promising route to developing adaptable control policies for more efficient liquid chromatography processing.
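As a rough illustration of the control setup described above (a toy environment and reward of our own devising, not the authors' simulator), the loop below shows how flow-rate actions and a productivity-versus-purity reward fit together:

```python
import random

def reward(purity, productivity, purity_floor=0.95, penalty=10.0):
    """Productivity counts, but dropping below a purity floor is punished
    (the floor and penalty weight are hypothetical)."""
    return productivity - penalty * max(0.0, purity_floor - purity)

class ToyColumn:
    """Toy stand-in for a simulated column; binding capacity is randomized
    per episode to mimic column-to-column variability."""
    def __init__(self, seed=None):
        self.capacity = random.Random(seed).uniform(0.7, 1.3)
        self.loaded = 0.0

    def step(self, inlet_flow, outlet_flow):
        # Crude dynamics: inlet flow loads the column up to capacity, and
        # purity degrades once loading approaches the capacity limit.
        self.loaded = min(self.capacity, self.loaded + 0.1 * inlet_flow)
        overload = max(0.0, self.loaded - 0.9 * self.capacity)
        purity = max(0.0, 1.0 - 2.0 * overload)
        productivity = outlet_flow * self.loaded
        return purity, productivity

# A deep RL agent would map observations to (inlet_flow, outlet_flow) and be
# trained across many randomized columns; here a fixed action shows the loop.
env = ToyColumn(seed=0)
for t in range(5):
    purity, productivity = env.step(inlet_flow=1.0, outlet_flow=0.5)
    print(t, round(reward(purity, productivity), 3))
```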

3.
PLoS One; 17(5): e0264241, 2022.
Article in English | MEDLINE | ID: mdl-35588399

ABSTRACT

Fluorescence microscopy is a core method for visualizing and quantifying the spatial and temporal dynamics of complex biological processes. While many fluorescence microscopy techniques exist, widefield fluorescence imaging remains one of the most widely used owing to its cost-effectiveness and accessibility. To image 3D samples, conventional widefield fluorescence imaging entails acquiring a sequence of 2D images spaced along the z-dimension, typically called a z-stack. Often, the first step in an analysis pipeline is to project that 3D volume into a single 2D image, because 3D image data can be cumbersome to manage and challenging to analyze and interpret. Furthermore, z-stack acquisition is often time-consuming and consequently may photodamage the biological sample; these are major barriers for workflows that require high throughput, such as drug screening. As an alternative to z-stacks, axial sweep acquisition schemes have been proposed to circumvent these drawbacks and offer the potential of 100-fold faster image acquisition for 3D samples compared with z-stack acquisition. Unfortunately, these acquisition techniques generate low-quality 2D z-projected images that require restoration with unwieldy, computationally heavy algorithms before the images can be interrogated. We propose a novel workflow that combines axial z-sweep acquisition with deep learning-based image restoration, ultimately enabling high-throughput, high-quality imaging of complex 3D samples using 2D projection images. To demonstrate the capabilities of the proposed workflow, we apply it to live-cell imaging of large 3D tumor spheroid cultures and find we can produce high-fidelity images appropriate for quantitative analysis. We therefore conclude that combining axial z-sweep image acquisition with deep learning-based image restoration enables high-throughput, high-quality fluorescence imaging of complex 3D biological samples.
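A small synthetic sketch (assumed data, not the authors' pipeline) of the two acquisition routes being contrasted:

```python
import numpy as np

rng = np.random.default_rng(0)
volume = np.zeros((16, 64, 64))                  # (z, y, x) synthetic sample
for _ in range(20):                              # sparse fluorescent points
    z, y, x = rng.integers(0, [16, 64, 64])
    volume[z, y, x] = rng.uniform(0.5, 1.0)

# Conventional route: acquire 16 focal planes, then max-project to 2D.
z_stack_projection = volume.max(axis=0)

# Axial sweep: a single exposure while the focal plane sweeps through z,
# roughly the z-integral of the sample, degraded by blur and noise.
sweep_image = volume.sum(axis=0) + rng.normal(0.0, 0.05, size=(64, 64))

# In the proposed workflow, a trained restoration network (e.g. a CNN trained
# on paired sweep/projection images) maps sweep_image back to an estimate of
# z_stack_projection:
#   restored = restoration_model.predict(sweep_image)   # hypothetical model
print(z_stack_projection.shape, sweep_image.shape)
```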


Subjects
Deep Learning; Algorithms; Image Processing, Computer-Assisted; Imaging, Three-Dimensional/methods; Microscopy, Fluorescence; Optical Imaging
4.
Nat Methods; 18(9): 1038-1045, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34462594

ABSTRACT

Light microscopy combined with well-established protocols for two-dimensional cell culture facilitates high-throughput quantitative imaging to study biological phenomena. Accurate segmentation of individual cells in images enables the exploration of complex biological questions but can require sophisticated image-processing pipelines in cases of low contrast and high object density. Deep learning-based methods are considered state of the art for image segmentation but typically require vast amounts of annotated data, for which no suitable resource is available in the field of label-free cellular imaging. Here, we present LIVECell, a large, high-quality, manually annotated and expert-validated dataset of phase-contrast images, consisting of over 1.6 million cells from a diverse set of cell morphologies and culture densities. To demonstrate its use, we train convolutional neural network-based models on LIVECell and evaluate model segmentation accuracy with a proposed suite of benchmarks.
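As a usage sketch, assuming the LIVECell annotations have been downloaded locally in COCO format (the file name below is hypothetical), the kind of bookkeeping that precedes model training looks like this:

```python
import json
from collections import Counter

# Hypothetical local path to COCO-style training annotations.
with open("livecell_coco_train.json") as f:
    coco = json.load(f)

cells_per_image = Counter(ann["image_id"] for ann in coco["annotations"])
print(f"{len(coco['images'])} images, {len(coco['annotations'])} annotated cells")
print("densest image contains", max(cells_per_image.values()), "cells")
```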


Subjects
Databases, Factual; Image Processing, Computer-Assisted/methods; Microscopy/methods; Models, Biological; Cell Culture Techniques; Humans; Neural Networks, Computer
5.
SLAS Technol; 26(4): 408-414, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33874798

ABSTRACT

Machine vision is a powerful technology that has become increasingly popular and accurate over the last decade owing to rapid advances in the field of machine learning. The majority of machine vision applications are currently found in consumer electronics, automotive applications, and quality control, yet the potential for bioprocessing applications is tremendous. For instance, detecting and controlling foam emergence is important for all upstream bioprocesses, but the lack of robust foam sensing often leads to batch failures from foam-outs or the overaddition of antifoam agents. Here, we report a new low-cost, flexible, and reliable foam-sensor concept for bioreactor applications. The concept applies convolutional neural networks (CNNs), a state-of-the-art machine learning approach to image processing. The implemented method shows high accuracy for both binary foam detection (foam/no foam) and fine-grained classification of foam levels.
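A minimal sketch of such a classifier (the architecture, input size, and number of foam levels are our assumptions, not the published model):

```python
import tensorflow as tf

NUM_FOAM_LEVELS = 5   # hypothetical number of discrete foam levels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),       # one camera frame
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_FOAM_LEVELS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(frames, foam_levels, epochs=...)  # labeled frames from the reactor
# Binary foam/no-foam detection is the same network with NUM_FOAM_LEVELS = 2.
```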


Subjects
Machine Learning; Neural Networks, Computer; Algorithms; Bioreactors; Image Processing, Computer-Assisted
6.
BMC Bioinformatics; 20(1): 498, 2019 Oct 15.
Article in English | MEDLINE | ID: mdl-31615395

ABSTRACT

BACKGROUND: Selecting the proper parameter settings for bioinformatic software tools is challenging. Not only does each parameter have an individual effect on the outcome, but there are also potential interaction effects between parameters. Both of these effects may be difficult to predict. To make the situation even more complex, multiple tools may be run in a sequential pipeline where the final output depends on the parameter configuration of each tool in the pipeline. Because of this complexity and the difficulty of predicting outcomes, parameters are in practice often left at default settings or set based on personal or peer experience obtained in a trial-and-error fashion. To allow for the reliable and efficient selection of parameters for bioinformatic pipelines, a systematic approach is needed. RESULTS: We present doepipeline, a novel approach to optimizing bioinformatic software parameters based on core concepts of the Design of Experiments methodology and recent advances in subset designs. Optimal parameter settings are first approximated in a screening phase using a subset design that efficiently spans the entire search space, then refined in a subsequent phase using response-surface designs and OLS modeling, as sketched below. Doepipeline was used to optimize parameters in four use cases: (1) de novo assembly; (2) scaffolding of a fragmented genome assembly; (3) k-mer taxonomic classification of Oxford Nanopore Technologies MinION reads; and (4) genetic variant calling. In all four cases, doepipeline found parameter settings that produced a better outcome, with respect to the measured characteristic, than the default values. Our approach is implemented and available in the Python package doepipeline. CONCLUSIONS: Our proposed methodology provides a systematic and robust framework for optimizing software parameter settings, in contrast to labor- and time-intensive manual parameter tweaking. Its implementation in doepipeline makes our methodology accessible and user-friendly, and allows for the automatic optimization of tools in a wide range of cases. The source code of doepipeline is available at https://github.com/clicumu/doepipeline and it can be installed through conda-forge.
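A stripped-down illustration of the response-surface phase (not doepipeline itself; design points and scores are invented): fit an OLS quadratic to scores measured at designed settings of two coded parameters, then read off the predicted optimum.

```python
import numpy as np

# Face-centred central composite design in two coded factors, plus the
# pipeline score measured at each run (all values invented).
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], float)
y = np.array([0.66, 0.72, 0.60, 0.66, 0.71, 0.77, 0.74, 0.68, 0.79])

def quad_terms(X):
    """Full quadratic model matrix: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Ordinary least squares fit of the response surface.
beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)

# Evaluate the fitted surface on a grid and report the predicted optimum.
g = np.linspace(-1, 1, 41)
grid = np.array([[a, b] for a in g for b in g])
pred = quad_terms(grid) @ beta
best = grid[pred.argmax()]
print("predicted best coded setting:", best, "score:", round(pred.max(), 3))
```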


Subjects
Genomics/methods; Sequence Analysis, DNA/methods; Software; Francisella tularensis/genetics; Genome, Bacterial; Nanopores
7.
Faraday Discuss; 218(0): 268-283, 2019 Aug 15.
Article in English | MEDLINE | ID: mdl-31120463

ABSTRACT

Modern profiling technologies enable us to obtain large amounts of data that can later be used to build a comprehensive understanding of the studied system. Proper evaluation of such data is challenging and cannot be achieved by analyzing each data set separately. Integrated approaches are necessary, because only data integration allows us to find correlation trends common to all studied data sets and to reveal hidden structures not known a priori. This improves the understanding and interpretation of complex systems. Joint and Unique MultiBlock Analysis (JUMBA) is an analysis method based on the OnPLS algorithm that decomposes a set of matrices into joint parts, containing variation shared with other connected matrices, and parts that are unique to each single matrix. Mapping unique variation is important from a data-integration perspective, since it certainly cannot be expected that all variation co-varies. In this work we used JUMBA for the integrated analysis of lipidomic, metabolomic, and oxylipin data sets obtained from profiling plasma samples from children infected with P. falciparum malaria. P. falciparum is one of the primary contributors to childhood mortality and obstetric complications in the developing world, which makes the development of new diagnostic and prognostic tools, as well as a better understanding of the disease, of utmost importance. In the presented work, JUMBA made it possible to detect already-known trends related to disease progression, but also to discover new structures in the data connected to food intake and individual differences in metabolism. By separating the variation in each data set into joint and unique parts, JUMBA reduced the complexity of the analysis and facilitated the detection of samples and variables corresponding to specific structures across multiple data sets, thereby enabling fast interpretation of the studied system. All of this makes JUMBA well suited to multiblock analysis of systems biology data.
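The joint/unique idea can be illustrated with a two-block toy example (far simpler than the OnPLS algorithm JUMBA builds on): simulate two data blocks sharing one latent component, then recover the joint direction from the cross-covariance between the blocks.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
t_joint = rng.normal(size=n)    # variation shared by both blocks
t_uniq1 = rng.normal(size=n)    # variation unique to block 1
t_uniq2 = rng.normal(size=n)    # variation unique to block 2

p1, p2, q1, q2 = (rng.normal(size=8) for _ in range(4))
X1 = np.outer(t_joint, p1) + np.outer(t_uniq1, q1) + 0.1 * rng.normal(size=(n, 8))
X2 = np.outer(t_joint, p2) + np.outer(t_uniq2, q2) + 0.1 * rng.normal(size=(n, 8))

# The leading singular vectors of the cross-covariance X1.T @ X2 point along
# the loadings of the shared component; projecting each block onto them
# yields joint scores, and what remains after deflation is unique variation.
U, s, Vt = np.linalg.svd(X1.T @ X2)
score1, score2 = X1 @ U[:, 0], X2 @ Vt[0]
print("correlation of joint scores:",
      round(float(np.corrcoef(score1, score2)[0, 1]), 3))
```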


Subjects
Malaria/blood; Algorithms; Child; Humans; Malaria/diagnosis; Malaria/parasitology; Plasmodium falciparum/isolation & purification
8.
PLoS One; 14(3): e0213350, 2019.
Article in English | MEDLINE | ID: mdl-30917156

ABSTRACT

Whole-genome sequencing is a promising approach for studies of human autosomal dominant disease. However, the vast number of genetic variants observed by this method constitutes a challenge when trying to identify the causal variants. This is often handled by restricting disease studies to the most damaging variants, e.g. those found in coding regions, and overlooking the remaining genetic variation. Such a biased approach explains in part why the genetic causes of disease in many families with dominantly inherited diseases remain unsolved today, despite their inclusion in whole-genome sequencing studies. Here we explore the use of a geographically matched control population to minimize the number of candidate disease-causing variants without excluding variants based on assumptions about genomic position or functional predictions. To exemplify the benefit of the geographically matched control population, we apply a typical disease-variant filtering strategy in a family with an autosomal dominant form of colorectal cancer. With the geographically matched control population we end up with 26 candidate variants genome-wide. This is in contrast to the tens of thousands of candidates left when using only the available public variant datasets. The effect of the local control population is dual: it (1) reduces the total number of candidate variants shared between affected individuals, and, more importantly, (2) increases the rate at which the number of candidate variants is reduced as additional affected family members are included in the filtering strategy. We demonstrate that applying a geographically matched control population effectively limits the number of candidate disease-causing variants and may provide the means by which variants suitable for functional studies are identified genome-wide.
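In set terms the filtering strategy reduces to a few lines; the sketch below (hypothetical variants) shows both effects: intersection across affected relatives and subtraction of the local control population.

```python
# Variant calls per affected family member (hypothetical identifiers).
affected = [
    {"chr1:1000:A>T", "chr2:5000:G>C", "chr7:250:C>T", "chr9:42:T>G"},
    {"chr1:1000:A>T", "chr2:5000:G>C", "chr9:42:T>G"},
    {"chr1:1000:A>T", "chr2:5000:G>C", "chr7:250:C>T"},
]
# Variants common in the geographically matched controls but rare globally.
local_controls = {"chr2:5000:G>C", "chr9:42:T>G"}

shared = set.intersection(*affected)   # effect (1): fewer shared candidates
candidates = shared - local_controls   # effect (2): local variants removed
print(candidates)                      # -> {'chr1:1000:A>T'}
```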


Subjects
Genetic Diseases, Inborn/genetics; Genetic Variation; Whole Genome Sequencing; Case-Control Studies; Colorectal Neoplasms/genetics; Female; Genes, Dominant; Genome-Wide Association Study/statistics & numerical data; Geography; Haplotypes; Humans; Male; Pedigree; Sweden; Whole Genome Sequencing/statistics & numerical data
9.
Phys Med Biol; 55(15): 4247-60, 2010 Aug 07.
Article in English | MEDLINE | ID: mdl-20616404

ABSTRACT

The correction for general recombination losses in liquid ionization chambers (LICs) is more complex than in air-filled ionization chambers. The reason is that the saturation charge in LICs, i.e. the charge that escapes initial recombination, depends on the applied voltage. This paper presents a method for general recombination correction in LICs based on measurements at two different dose rates in a pulsed beam. The Boag theory for pulsed beams is used, and the collection efficiency is determined by numerical methods analogous to the two-voltage method used in dosimetry with air-filled ionization chambers. The method was tested in experiments in water in a 20 MeV electron beam using two LICs filled with isooctane and tetramethylsilane. The dose per pulse in the electron beam was varied between 0.1 mGy/pulse and 8 mGy/pulse. The relative standard deviations of the collection efficiencies determined with the two-dose-rate method ranged between 0.1% and 1.5%. The dose-rate variations of the general-recombination-corrected charge measured with the LICs are in excellent agreement with the corresponding values obtained with an air-filled plane-parallel ionization chamber.
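A minimal numerical sketch of the two-dose-rate idea (illustrative numbers, not the paper's data): Boag's pulsed-beam collection efficiency is f(u) = ln(1 + u)/u with u proportional to the dose per pulse, so measuring the charge ratio at two known dose-per-pulse levels determines u, and hence f, by a one-dimensional root find.

```python
import math

def f_boag(u):
    """Boag collection efficiency for a pulsed beam."""
    return math.log1p(u) / u

def solve_u1(charge_ratio, dose_ratio, lo=1e-6, hi=100.0):
    """Bisect for u at the higher dose per pulse, using
    Q1/Q2 = dose_ratio * f(u1) / f(u1 / dose_ratio)."""
    def g(u1):
        return dose_ratio * f_boag(u1) / f_boag(u1 / dose_ratio) - charge_ratio
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: dose per pulse halved between readings (dose_ratio = 2) and a
# measured charge ratio of 1.9 instead of the recombination-free 2.0.
u1 = solve_u1(charge_ratio=1.9, dose_ratio=2.0)
print("u1 =", round(u1, 3), "-> collection efficiency f1 =", round(f_boag(u1), 4))
```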


Subjects
Radiometry/instrumentation; Ions; Radiation Dosage; Temperament
10.
Acta Oncol; 43(8): 727-35, 2004.
Article in English | MEDLINE | ID: mdl-15764217

ABSTRACT

Conformal radiotherapy and intensity-modulated radiotherapy (IMRT) commonly lead to a large integral dose in the patient. Electrons would reduce the integral dose but are not suitable for treating deep-seated tumours owing to their limited penetration. By combining electron and photon beams, the dose distributions may be improved. In this study, the possibility of using a mixture of electron and photon beams for a deep-seated target volume in the head and neck region is explored. Treatment plans were made for five simulated head and neck cancer cases. Mixed electron and photon beam plans (MB) were constructed using a manual iterative procedure, whereas photon IMRT plans were optimized automatically. Both electron and photon beams were collimated by a computer-controlled multi-leaf collimator (MLC). Both methods produced clinically acceptable plans. Criteria for the target dose were met similarly by both methods, as were the criteria for critical organs. The integral dose outside the planning target volume (PTV) tended to be lower with MB plans than with photon IMRT plans. A mixed electron and photon technique thus has the potential to treat deep-seated tumours. It is reasonable to expect that, if computerized optimization tools were coupled with the mixed electron and photon beam technique, treatment goals would be met more readily than with photon IMRT alone.


Subjects
Head and Neck Neoplasms/radiotherapy; Phantoms, Imaging; Radiation Injuries/prevention & control; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Conformal/methods; Combined Modality Therapy; Dose-Response Relationship, Radiation; Electrons/therapeutic use; Female; Head and Neck Neoplasms/pathology; Humans; Male; Maximum Tolerated Dose; Photons/therapeutic use; Radiotherapy Dosage; Sampling Studies; Sensitivity and Specificity; Statistics, Nonparametric